"We always manage to hijack AI": ChatGPT's upcoming parental controls leave skeptics unconvinced

ChatGPT will introduce parental controls by the end of the month, its maker, OpenAI, announced in a press release on Tuesday. In France, more than half of 15- to 24-year-olds have already used ChatGPT, according to a Médiamétrie study published last December.
Why now? Because the artificial intelligence is the subject of a lawsuit: an American family accuses it of having encouraged the suicide of their son, who had confided his distress to the chatbot. In response, OpenAI is offering parents oversight of their children's conversations.
The American company will allow parents to link their account to their child's. This oversight should give them control over settings, but also alert them directly in the event of "acute distress" on the part of their child. If, for example, a teenager expresses deep distress in questions put to ChatGPT, the parents will be notified immediately.
But how can such questions be detected reliably? That is the question raised by Magali Germond, a digital ethics specialist: "Be careful, because the weakness of conversational AI is that we can always misuse it. There's always another way to ask the question, a way to extract information through roundabout phrasings. It's a first step, it's better than nothing, but it's not enough. Let's remain vigilant," she told RMC.
Tuesday's announcement "is really the bare minimum," Melodi Dincer, a lawyer who brought the case to court with the parents and an association, told AFP.

OpenAI said it is taking further steps, expected within the next 120 days. The company will redirect certain "sensitive conversations" to more advanced reasoning models such as GPT-5-thinking. "Reasoning models more consistently follow and apply safety guidelines," the American group said.